Error Corrective Boosting for Learning Fully Convolutional Networks with Limited Data
Training deep fully convolutional neural networks (F-CNNs) for semantic image
segmentation requires access to abundant labeled data. While large datasets of
unlabeled image data are available in medical applications, access to manually
labeled data is very limited. We propose to automatically create auxiliary
labels on initially unlabeled data with existing tools and to use them for
pre-training. For the subsequent fine-tuning of the network with manually
labeled data, we introduce error corrective boosting (ECB), which emphasizes
parameter updates on classes with lower accuracy. Furthermore, we introduce
SkipDeconv-Net (SD-Net), a new F-CNN architecture for brain segmentation that
combines skip connections with the unpooling strategy for upsampling. The
SD-Net addresses challenges of severe class imbalance and errors along
boundaries. With application to whole-brain MRI T1 scan segmentation, we
generate auxiliary labels on a large dataset with FreeSurfer and fine-tune on
two datasets with manual annotations. Our results show that the inclusion of
auxiliary labels and ECB yields significant improvements. SD-Net segments a 3D
scan in 7 seconds, compared to 30 hours for the closest multi-atlas
segmentation method, while reaching similar performance. It also outperforms
the latest state-of-the-art F-CNN models.
Comment: Accepted at MICCAI 201
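The class-weighting idea behind error corrective boosting can be sketched in a few lines. The formula below is a simplified illustration, not the paper's exact scheme: classes whose validation accuracy lags furthest behind the best class receive the largest loss weight. The function name and the normalization to a mean weight of 1 are assumptions for this sketch.

```python
import numpy as np

def ecb_class_weights(per_class_accuracy, eps=1e-6):
    """Assign larger loss weights to classes with lower accuracy.

    Simplified sketch of the error-corrective-boosting idea; the
    paper's exact weighting formula may differ.
    """
    acc = np.asarray(per_class_accuracy, dtype=float)
    # Classes far below the best-performing class get boosted most.
    w = (acc.max() - acc + eps) / (acc.max() - acc.min() + eps)
    return w / w.sum() * len(acc)  # normalize so the mean weight is 1

# Example: class 2 is weakest, so it receives the largest weight.
weights = ecb_class_weights([0.95, 0.90, 0.60])
```

These weights would then scale the per-class terms of the segmentation loss during fine-tuning, emphasizing parameter updates on the poorly segmented classes.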
Adversarial Convolutional Networks with Weak Domain-Transfer for Multi-sequence Cardiac MR Images Segmentation
Analysis and modeling of the ventricles and myocardium are important in the
diagnosis and treatment of heart diseases. Manual delineation of those tissues
in cardiac MR (CMR) scans is laborious and time-consuming. The ambiguity of the
boundaries makes the segmentation task rather challenging. Furthermore, the
annotations on some modalities such as Late Gadolinium Enhancement (LGE) MRI,
are often not available. We propose an end-to-end segmentation framework based
on convolutional neural network (CNN) and adversarial learning. A dilated
residual U-shaped network is used as the segmentor to generate the prediction
mask; meanwhile, a CNN is utilized as a discriminator model to judge the
segmentation quality. To leverage the available annotations across modalities
per patient, a new loss function named weak domain-transfer loss is introduced
to the pipeline. The proposed model is evaluated on the public dataset released
by the challenge organizer in MICCAI 2019, which consists of 45 sets of
multi-sequence CMR images. We demonstrate that the proposed adversarial
pipeline outperforms baseline deep-learning methods.
Comment: 9 pages, 4 figures, conference
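The segmentor/discriminator coupling can be summarized as a combined loss: a supervised segmentation term plus an adversarial term that pushes the discriminator's score on predicted masks toward "real". This is a generic sketch of adversarial segmentation training, not the paper's weak domain-transfer loss; the function name and the balance weight `lam` are assumptions.

```python
import numpy as np

def total_segmentor_loss(seg_loss, disc_score_on_pred, lam=0.1):
    """Combine a supervised segmentation loss with an adversarial term.

    `disc_score_on_pred` is the discriminator's probability that the
    predicted mask is a real annotation; the adversarial term rewards
    predictions the discriminator cannot distinguish from real masks.
    """
    adv_loss = -np.log(np.clip(disc_score_on_pred, 1e-7, 1.0))
    return seg_loss + lam * adv_loss
```

A mask the discriminator easily rejects (low score) incurs a larger total loss than one it accepts, which is the signal the segmentor trains against.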
Improving the Segmentation of Anatomical Structures in Chest Radiographs using U-Net with an ImageNet Pre-trained Encoder
Accurate segmentation of anatomical structures in chest radiographs is
essential for many computer-aided diagnosis tasks. In this paper we investigate
the latest fully-convolutional architectures for the task of multi-class
segmentation of the lung fields, heart and clavicles in a chest radiograph. In
addition, we explore the influence of using different loss functions in the
training process of a neural network for semantic segmentation. We evaluate all
models on a common benchmark of 247 X-ray images from the JSRT database and
ground-truth segmentation masks from the SCR dataset. Our best-performing
architecture is a modified U-Net that benefits from pre-trained encoder
weights. This model outperformed the current state-of-the-art methods tested on
the same benchmark, with Jaccard overlap scores of 96.1% for lung fields, 90.6%
for heart and 85.5% for clavicles.
Comment: Presented at the First International Workshop on Thoracic Image
Analysis (TIA), MICCAI 201
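One loss function commonly compared in studies like this is the soft Dice loss, and the Jaccard overlap is the metric reported above. The following are minimal numpy sketches of both, not the paper's implementations; `pred` and `target` are assumed to be probability maps in [0, 1] of identical shape.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss, a common choice for class-imbalanced segmentation."""
    p, t = pred.ravel(), target.ravel()
    inter = (p * t).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + t.sum() + eps)

def jaccard_index(pred, target, thr=0.5):
    """Jaccard (intersection-over-union) overlap, the metric reported above."""
    p = pred.ravel() >= thr
    t = target.ravel() >= thr
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 1.0
```

A perfect prediction drives the Dice loss to zero and the Jaccard index to 1, which is the sanity check any implementation should pass.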
3DQ: Compact Quantized Neural Networks for Volumetric Whole Brain Segmentation
Model architectures have been dramatically increasing in size, improving
performance at the cost of resource requirements. In this paper we propose 3DQ,
a ternary quantization method, applied for the first time to 3D Fully
Convolutional Neural Networks (F-CNNs), enabling 16x model compression while
maintaining performance on par with full precision models. We extensively
evaluate 3DQ on two datasets for the challenging task of whole brain
segmentation. Additionally, we showcase our method's ability to generalize on
two common 3D architectures, namely 3D U-Net and V-Net. Outperforming a variety
of baselines, the proposed method is capable of compressing large 3D models to
a few MBytes, alleviating the storage needs in space-critical applications.
Comment: Accepted to MICCAI 201
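The core of ternary quantization is mapping each weight to one of three values, {-alpha, 0, +alpha}, which is what enables the large compression factor. The sketch below follows the common ternary-weight recipe (threshold at a fraction of the mean absolute weight); 3DQ's exact scheme, e.g. any learned scaling factors, may differ, and the `delta_factor` default is an assumption.

```python
import numpy as np

def ternarize(w, delta_factor=0.7):
    """Quantize weights to the three values {-alpha, 0, +alpha}.

    Weights with magnitude below a threshold `delta` collapse to zero;
    the rest snap to a shared magnitude `alpha`, chosen here as the
    mean absolute value of the surviving weights.
    """
    w = np.asarray(w, dtype=float)
    delta = delta_factor * np.abs(w).mean()   # zero-threshold
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

q = ternarize([0.9, -0.8, 0.05, -0.02])
```

Since each weight now needs only two bits (plus one shared float per tensor), a full-precision 3D model shrinks by roughly the 16x factor the abstract reports.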
Fully Automatic and Real-Time Catheter Segmentation in X-Ray Fluoroscopy
Augmenting X-ray imaging with 3D roadmap to improve guidance is a common
strategy. Such approaches benefit from automated analysis of the X-ray images,
such as the automatic detection and tracking of instruments. In this paper, we
propose a real-time method to segment the catheter and guidewire in 2D X-ray
fluoroscopic sequences. The method is based on deep convolutional neural
networks. The network takes as input the current image and the three previous
ones, and segments the catheter and guidewire in the current image.
Subsequently, a centerline model of the catheter is constructed from the
segmented image. A small set of annotated data combined with data augmentation
is used to train the network. We trained the method on images from 182 X-ray
sequences from 23 different interventions. On a testing set with images of 55
X-ray sequences from 5 other interventions, a median centerline distance error
of 0.2 mm and a median tip distance error of 0.9 mm were obtained. The
segmentation of the instruments in 2D X-ray sequences is performed in a
real-time, fully automatic manner.
Comment: Accepted to MICCAI 201
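The input construction described above (current frame plus the three previous ones) amounts to stacking frames along the channel axis. A minimal sketch follows; padding the start of a sequence by repeating the first frame is an assumption here, as the paper may handle sequence boundaries differently.

```python
import numpy as np

def temporal_stack(frames, t, history=3):
    """Stack frame t with its `history` predecessors as input channels.

    `frames` is a sequence of (H, W) arrays; the result has shape
    (history + 1, H, W) with the current frame as the last channel.
    """
    idx = [max(0, t - k) for k in range(history, -1, -1)]
    return np.stack([frames[i] for i in idx], axis=0)

frames = np.arange(5 * 2 * 2, dtype=float).reshape(5, 2, 2)
x = temporal_stack(frames, t=3)   # channels: frames 0, 1, 2, 3
```

Feeding the short temporal context lets the network exploit instrument motion between frames rather than segmenting each fluoroscopic image in isolation.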
Modeling Camera Effects to Improve Visual Learning from Synthetic Data
Recent work has focused on generating synthetic imagery to increase the size
and variability of training data for learning visual tasks in urban scenes.
This includes increasing the occurrence of occlusions or varying environmental
and weather effects. However, few have addressed modeling variation in the
sensor domain. Sensor effects can degrade real images, limiting
generalizability of network performance on visual tasks trained on synthetic
data and tested in real environments. This paper proposes an efficient,
automatic, physically-based augmentation pipeline to vary sensor effects
(chromatic aberration, blur, exposure, noise, and color cast) for synthetic
imagery. In particular, this paper illustrates that augmenting synthetic
training datasets with the proposed pipeline reduces the domain gap between
synthetic and real domains for the task of object detection in urban driving
scenes.
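Three of the named sensor effects (exposure shift, noise, color cast) can be sketched with simple pixel-space operations. This is an illustrative stand-in, not the paper's physically based pipeline: chromatic aberration and blur are omitted, and the parameter ranges are assumptions.

```python
import numpy as np

def sensor_augment(img, rng, exposure_range=(0.7, 1.3),
                   noise_sigma=0.02, color_cast=0.05):
    """Apply simplified sensor effects to an RGB image in [0, 1].

    Exposure is a global gain, color cast is a per-channel offset,
    and sensor noise is additive Gaussian; results are clipped back
    to the valid intensity range.
    """
    out = img * rng.uniform(*exposure_range)                          # exposure
    out = out + rng.uniform(-color_cast, color_cast, size=(1, 1, 3))  # cast
    out = out + rng.normal(0.0, noise_sigma, size=img.shape)          # noise
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
augmented = sensor_augment(np.full((8, 8, 3), 0.5), rng)
```

Randomizing such effects over a synthetic training set exposes the detector to the degradations real cameras introduce, which is the mechanism behind the reported domain-gap reduction.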
Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks
Fluorescence microscopy images usually show severe anisotropy in axial versus
lateral resolution. This hampers downstream processing, i.e. the automatic
extraction of quantitative biological data. While deconvolution methods and
other techniques to address this problem exist, they are either time consuming
to apply or limited in their ability to remove anisotropy. We propose a method
to recover isotropic resolution from readily acquired anisotropic data. We
achieve this using a convolutional neural network that is trained end-to-end
from the same anisotropic body of data we later apply the network to. The
network effectively learns to restore the full isotropic resolution by
restoring the image under a trained, sample specific image prior. We apply our
method to synthetic and real datasets and show that our results improve
on results from deconvolution and state-of-the-art super-resolution techniques.
Finally, we demonstrate that a standard 3D segmentation pipeline performs on
the output of our network with comparable accuracy as on the full isotropic
data.
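The self-supervised setup described above can be sketched by how training pairs are built: a high-resolution lateral slice is artificially degraded along one axis to mimic the poor axial resolution, and the network learns to undo that degradation. Block averaging stands in for the true microscope PSF here, which is an assumption; the actual method models the anisotropy more carefully.

```python
import numpy as np

def make_training_pair(lateral_slice, axial_factor=4):
    """Build a (degraded, target) pair from a high-resolution 2D slice.

    The slice is block-averaged along axis 1 by `axial_factor`, then
    upsampled back by repetition, simulating anisotropic axial blur.
    """
    h, w = lateral_slice.shape
    w2 = (w // axial_factor) * axial_factor   # crop to a multiple
    x = lateral_slice[:, :w2]
    low = x.reshape(h, w2 // axial_factor, axial_factor).mean(axis=2)
    degraded = np.repeat(low, axial_factor, axis=1)
    return degraded, x

slice2d = np.arange(64, dtype=float).reshape(8, 8)
degraded, target = make_training_pair(slice2d)
```

Because both halves of each pair come from the same acquired volume, the network effectively learns a sample-specific prior without any external ground truth.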
A Deep Cascade of Convolutional Neural Networks for MR Image Reconstruction
The acquisition of Magnetic Resonance Imaging (MRI) is inherently slow.
Inspired by recent advances in deep learning, we propose a framework for
reconstructing MR images from undersampled data using a deep cascade of
convolutional neural networks to accelerate the data acquisition process. We
show that for Cartesian undersampling of 2D cardiac MR images, the proposed
method outperforms the state-of-the-art compressed sensing approaches, such as
dictionary learning-based MRI (DLMRI) reconstruction, in terms of
reconstruction error, perceptual quality and reconstruction speed for both
3-fold and 6-fold undersampling. Compared to DLMRI, the proposed method
produces approximately half the error, preserving anatomical structures
more faithfully. Using our method, each image can be
reconstructed in 23 ms, which is fast enough to enable real-time applications.
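A key ingredient in cascaded MR reconstruction networks of this kind is a data-consistency step interleaved between CNN blocks: wherever k-space was actually sampled, the measurement overrides the network's estimate. The sketch below assumes noiseless measurements and a single-coil Cartesian setup, which simplifies the general formulation.

```python
import numpy as np

def data_consistency(pred_img, sampled_kspace, mask):
    """Enforce consistency with measured k-space samples.

    `mask` is a boolean array marking sampled k-space locations; at
    those locations the measurement replaces the CNN estimate's
    Fourier coefficients, and the result is transformed back.
    """
    k = np.fft.fft2(pred_img)
    k = np.where(mask, sampled_kspace, k)
    return np.fft.ifft2(k)
```

With a fully sampled mask the output reduces to the measured image regardless of the CNN's prediction, while an empty mask leaves the prediction untouched; real undersampling sits between these extremes.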
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role for diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Due to the
relatively small data set for training, data augmentation at training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with BraTS 2018 training and validation set
show that test-time augmentation helps to improve the brain tumor segmentation
accuracy and obtain uncertainty estimation of the segmentation results.
Comment: 12 pages, 3 figures, MICCAI BrainLes 201
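The test-time-augmentation loop has a simple structure: transform the input, predict, invert the transform on the prediction, and average. The sketch below uses only flips; the 3D rotation, scaling, and noise augmentations the abstract mentions are omitted, and `model` is a placeholder for any segmentation network.

```python
import numpy as np

def tta_predict(model, image):
    """Average a model's predictions over flip augmentations.

    Each entry pairs a forward transform with its inverse; for flips
    the transform is its own inverse. The variance of the individual
    predictions could serve as an uncertainty estimate.
    """
    flips = [
        (lambda x: x, lambda y: y),   # identity
        (np.flipud, np.flipud),       # vertical flip
        (np.fliplr, np.fliplr),       # horizontal flip
    ]
    preds = [inv(model(fwd(image))) for fwd, inv in flips]
    return np.mean(preds, axis=0)
```

Beyond the accuracy gain from averaging, the spread of the per-augmentation predictions is what yields the uncertainty estimates reported above.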